Dimensionality reduction of SDPs through sketching
Authors
Abstract
Similar resources
Dimensionality reduction of SDPs through sketching
We show how to sketch semidefinite programs (SDPs) using positive maps in order to reduce their dimension. More precisely, we use Johnson-Lindenstrauss transforms to produce a smaller SDP whose solution preserves feasibility or approximates the value of the original problem with high probability. These techniques improve both the complexity and the storage-space requirements. They apply to pro...
Sketching, Embedding, and Dimensionality Reduction for Information Spaces
Information distances like the Hellinger distance and the Jensen-Shannon divergence have deep roots in information theory and machine learning. They are used extensively in data analysis, especially when the objects being compared are high-dimensional empirical probability distributions built from data. However, we lack common tools needed to actually use information distances in applications ef...
Learning Through Non-linearly Supervised Dimensionality Reduction
Dimensionality reduction is a crucial ingredient of machine learning and data mining, boosting classification accuracy through the isolation of patterns via omission of noise. Nevertheless, recent studies have shown that dimensionality reduction can benefit from label information, via a joint estimation of predictors and target variables from a low-rank representation. In the light of such insp...
Understanding Protein Flexibility through Dimensionality Reduction
This work shows how to decrease the complexity of modeling flexibility in proteins by reducing the number of dimensions necessary to model important macromolecular motions such as the induced-fit process. Induced fit occurs during the binding of a protein to other proteins, nucleic acids, or small molecules (ligands) and is a critical part of protein function. It is now widely accepted that con...
Input Decimation Ensembles: Decorrelation through Dimensionality Reduction
Using an ensemble of classifiers instead of a single classifier has been shown to improve generalization performance in many machine learning problems [4, 16]. However, the extent of such improvement depends greatly on the amount of correlation among the errors of the base classifiers [1, 14]. As such, reducing those correlations while keeping the base classifiers' performance levels high is a ...
Journal
Journal title: Linear Algebra and its Applications
Year: 2019
ISSN: 0024-3795
DOI: 10.1016/j.laa.2018.11.012